Supplementary Material for Flat Seeking Bayesian Neural Networks
Nguyen, Van-Anh, Vuong, Tung-Long
The proof can be found in Chapter 27 of [6]. For the non-flat version, the update is similar to mini-batch SGD except that we add small Gaussian noise to the particle models. In Section 4.2 of the main paper, we provide a comprehensive analysis of the performance. For the experiments presented in Tables 1 and 2 of the main paper, we train all models for 300 epochs using SGD with a learning rate of 0.1 and a cosine schedule. For the Deep-Ensemble, SGLD, SGVB, and SGVB-LRT baselines, we reproduce results using the same hyper-parameters and procedures as our flat versions. ImageNet: This is a large and challenging dataset with 1000 classes.
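The non-flat particle update described above — a mini-batch SGD step plus a small Gaussian perturbation of each particle model — can be sketched as follows. This is a minimal illustration on a toy least-squares problem, not the authors' code; the learning rate, noise scale, and particle count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_loss(theta, x, y):
    # Gradient of the least-squares loss 0.5 * ||x @ theta - y||^2
    return x.T @ (x @ theta - y)

def noisy_sgd_step(theta, x, y, lr=1e-2, noise_scale=1e-2):
    # Mini-batch SGD step plus small Gaussian noise on the particle,
    # as in the non-flat update described above (SGLD-style).
    noise = noise_scale * rng.standard_normal(theta.shape)
    return theta - lr * grad_loss(theta, x, y) + noise

# Toy regression data: y = x @ theta_true
x = rng.standard_normal((32, 3))
theta_true = np.array([1.0, -2.0, 0.5])
y = x @ theta_true

# Run a handful of particle models for a few hundred steps
particles = [rng.standard_normal(3) for _ in range(4)]
for _ in range(500):
    particles = [noisy_sgd_step(t, x, y) for t in particles]

mean_particle = np.mean(particles, axis=0)
```

Because the injected noise is small relative to the gradient signal, the particles hover around the minimizer rather than collapsing onto it, which is the intended sampling behavior.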
A Large-Scale Multimodal Dataset and Benchmarks for Human Activity Scene Understanding and Reasoning
Jiang, Siyang, Yuan, Mu, Ji, Xiang, Yang, Bufang, Liu, Zeyu, Xu, Lilin, Li, Yang, He, Yuting, Dong, Liran, Lu, Wenrui, Yan, Zhenyu, Jiang, Xiaofan, Gao, Wei, Chen, Hongkai, Xing, Guoliang
Multimodal human action recognition (HAR) leverages complementary sensors for activity classification. Beyond recognition, recent advances in large language models (LLMs) enable detailed descriptions and causal reasoning, motivating new tasks: human action understanding (HAU) and human action reasoning (HARn). However, most LLMs, especially large vision-language models (LVLMs), struggle with non-RGB modalities such as depth, IMU, and mmWave due to the lack of large-scale data-caption resources. Existing HAR datasets mainly provide coarse data-label annotations, which are insufficient to capture the fine-grained action dynamics needed for HAU and HARn. We consider two ground-truth pair types: (1) data-label pairs (discrete categories) and (2) data-caption pairs (textual descriptions). Captions generated naively from labels often lack logical and spatiotemporal consistency. We introduce CUHK-X, a large-scale multimodal dataset and benchmark suite for HAR, HAU, and HARn. CUHK-X contains 58,445 samples covering 40 actions performed by 30 participants across two indoor environments. To improve caption consistency, we propose a prompt-based scene creation method that leverages LLMs to generate logically connected activity sequences, followed by human validation. CUHK-X includes three benchmarks with six evaluation tasks. Experiments report average accuracies of 76.52% (HAR), 40.76% (HAU), and 70.25% (HARn). CUHK-X aims to enable the community to apply and develop data-intensive learning methods for robust, multimodal human activity analysis. Project page and code: https://openaiotlab.github.io/CUHK-X/ and https://github.com/openaiotlab/CUHK-X.
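The prompt-based scene creation step — asking an LLM for a logically connected sequence of actions and then validating the result — might be sketched as below. The prompt template, function names, and string-matching check are illustrative stand-ins for the paper's pipeline, not the authors' implementation (the real validation pass is done by humans).

```python
def build_scene_prompt(actions, environment):
    # Hypothetical prompt template: asks an LLM to arrange the given
    # actions into one coherent, causally ordered indoor scene.
    action_list = "\n".join(f"- {a}" for a in actions)
    return (
        f"You are describing a person in a {environment}.\n"
        f"Arrange the following actions into one coherent, causally "
        f"ordered activity sequence and write a short caption for each "
        f"step:\n{action_list}\n"
        f"Keep spatial and temporal references consistent across steps."
    )

def validate_captions(actions, captions):
    # Minimal automatic stand-in for the human-validation pass:
    # one caption per action, and each caption mentions its action.
    if len(captions) != len(actions):
        return False
    return all(a.lower() in c.lower() for a, c in zip(actions, captions))

actions = ["sit down", "open laptop", "type on keyboard"]
prompt = build_scene_prompt(actions, "office")

# Example captions as an LLM might return them for this prompt
captions = [
    "The person walks to the desk and sits down (sit down).",
    "They open laptop on the desk in front of them.",
    "They begin to type on keyboard, looking at the screen.",
]
ok = validate_captions(actions, captions)
```

Generating the whole scene in one prompt, rather than one caption per label, is what gives the sequence a chance at the logical and spatiotemporal consistency the abstract calls for.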
ProAgent: Harnessing On-Demand Sensory Contexts for Proactive LLM Agent Systems
Yang, Bufang, Xu, Lilin, Zeng, Liekang, Guo, Yunqi, Jiang, Siyang, Lu, Wenrui, Liu, Kaiwei, Xiang, Hancheng, Jiang, Xiaofan, Xing, Guoliang, Yan, Zhenyu
Large Language Model (LLM) agents are emerging to transform daily life. However, existing LLM agents primarily follow a reactive paradigm, relying on explicit user instructions to initiate services, which increases both physical and cognitive workload. In this paper, we propose ProAgent, the first end-to-end proactive agent system that harnesses massive sensory contexts and LLM reasoning to deliver proactive assistance. ProAgent first employs a proactive-oriented context extraction approach with on-demand tiered perception to continuously sense the environment and derive hierarchical contexts that incorporate both sensory and persona cues. ProAgent then adopts a context-aware proactive reasoner to map these contexts to user needs and tool calls, providing proactive assistance. We implement ProAgent on Augmented Reality (AR) glasses with an edge server and extensively evaluate it on a real-world testbed, a public dataset, and through a user study. Results show that ProAgent achieves up to 33.4% higher proactive prediction accuracy, 16.8% higher tool-calling F1 score, and notable improvements in user satisfaction over state-of-the-art baselines, marking a significant step toward proactive assistants. A video demonstration of ProAgent is available at https://youtu.be/pRXZuzvrcVs.
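The on-demand tiered perception and context-to-tool-call mapping described above can be sketched roughly as follows. This is a toy illustration under stated assumptions: the trigger threshold, context fields, and tool names are hypothetical, and a rule stands in for the LLM reasoner.

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    # Hierarchical context combining sensory and persona cues,
    # loosely following the paper's description.
    sensory: dict = field(default_factory=dict)
    persona: dict = field(default_factory=dict)

def tier1_trigger(frame):
    # Cheap, always-on check; escalate to heavier perception only
    # when enough motion is detected (threshold is an assumption).
    return frame.get("motion", 0.0) > 0.5

def tier2_extract(frame, persona):
    # Heavier, on-demand extraction (stands in for vision/audio models).
    return Context(sensory={"activity": frame.get("activity", "unknown")},
                   persona=persona)

def proactive_reasoner(ctx):
    # Stand-in for the context-aware LLM reasoner: map the context
    # to a user need and a tool call. Tool names are hypothetical.
    if ctx.sensory.get("activity") == "cooking" and "diet" in ctx.persona:
        return ("recipe_assistant", {"diet": ctx.persona["diet"]})
    return None

persona = {"diet": "vegetarian"}
frames = [{"motion": 0.1},                          # idle: tier 1 only
          {"motion": 0.9, "activity": "cooking"}]   # active: escalate

calls = []
for frame in frames:
    if tier1_trigger(frame):                   # tier 1: gate
        ctx = tier2_extract(frame, persona)    # tier 2: extract context
        call = proactive_reasoner(ctx)         # reason -> tool call
        if call:
            calls.append(call)
```

Gating the expensive extraction behind a cheap trigger is what makes continuous sensing affordable on wearable hardware such as AR glasses.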
MANTRA: a Framework for Multi-stage Adaptive Noise TReAtment During Training
Zhao, Zixiao, Fard, Fatemeh H., Wu, Jie JW
The reliable application of deep learning models to software engineering tasks hinges on high-quality training data. Yet large-scale repositories inevitably introduce noisy or mislabeled examples that degrade both accuracy and robustness. While Noise Label Learning (NLL) has been extensively studied in other fields, only a few works investigate NLL in Software Engineering (SE) or in Large Language Models (LLMs) for SE tasks. In this work, we propose MANTRA, a Multi-stage Adaptive Noise TReAtment framework that embeds noise diagnosis and mitigation directly into the fine-tuning process of code-Pretrained Language Models (PTMs) and code-LLMs. We first investigate the effect of noise at varying levels on the convergence and loss trajectories of the models. We then apply an adaptive dropout strategy, guided by per-sample loss dynamics and Gaussian Mixture Model clustering, to exclude persistently noisy points while preserving clean data. Applied to code summarization and commit intent classification, our experiments reveal that some LLMs are more sensitive to noise than others; with MANTRA, however, the performance of all models improves on both tasks. MANTRA enables researchers and practitioners to reduce the impact of dataset errors during training and saves time on data cleaning and processing while maximizing the effect of fine-tuning.
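The core filtering idea — cluster per-sample losses with a two-component Gaussian mixture and drop the high-loss (presumably noisy) component — can be sketched in pure NumPy. This is a minimal stand-in, not the authors' implementation: the EM routine replaces a library GMM, and the synthetic loss distributions are assumptions.

```python
import numpy as np

def fit_gmm_1d(x, iters=100):
    # Plain EM for a two-component 1-D Gaussian mixture over
    # per-sample losses (a small stand-in for a library GMM).
    mu = np.array([x.min(), x.max()])       # deterministic, separated init
    var = np.full(2, x.var() + 1e-6)
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each loss value
        dens = (pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2 * np.pi * var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means, and variances
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return mu, var, pi

def clean_mask(losses):
    # Keep samples assigned to the low-mean (clean) loss component;
    # the rest are treated as persistently noisy and dropped.
    mu, var, pi = fit_gmm_1d(losses)
    dens = (pi * np.exp(-0.5 * (losses[:, None] - mu) ** 2 / var)
            / np.sqrt(2 * np.pi * var))
    return dens.argmax(axis=1) == mu.argmin()

# Synthetic per-sample losses: 90 clean, 10 persistently noisy
rng = np.random.default_rng(1)
clean = rng.normal(0.2, 0.05, 90)
noisy = rng.normal(1.5, 0.2, 10)
losses = np.concatenate([clean, noisy])
mask = clean_mask(losses)
```

In the framework described above, a mask like this would be recomputed from loss dynamics during fine-tuning, so borderline samples can re-enter training as estimates improve.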